Probabilistic forecasting consists of stating a probability distribution over future outcomes based on past observations. In meteorology, ensembles of physics-based numerical models are run to obtain such distributions. Usually, forecasts are evaluated with scoring rules, functions of the forecast distribution and the observed outcome. With some scoring rules, the calibration and sharpness of the forecast can be assessed at the same time. In deep learning, generative neural networks parametrize distributions on high-dimensional spaces and easily allow sampling by transforming draws from a latent variable. Conditional generative networks additionally constrain the distribution on an input variable. In this manuscript, we perform probabilistic forecasting with conditional generative networks trained to minimize scoring rule values. In contrast to Generative Adversarial Networks (GANs), no discriminator is required and training is stable. We perform experiments on two chaotic models and a global dataset of weather observations; results are satisfactory and better calibrated than what is achieved by GANs.
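To make the training objective concrete: with the energy score, one proper scoring rule, an unbiased estimate can be computed from a finite ensemble drawn from the generator and minimized directly by gradient descent. The following is a minimal PyTorch sketch, not the authors' implementation; the generator signature `g(conditioning, latent)`, the names `g` and `training_step`, and the tensor shapes are assumptions.

```python
import torch

def energy_score(samples, obs):
    # samples: (m, d) forecast ensemble from the conditional generator
    # obs: (d,) observed outcome
    m = samples.shape[0]
    term1 = torch.cdist(samples, obs.unsqueeze(0)).mean()        # E||X - y||
    term2 = torch.cdist(samples, samples).sum() / (2 * m * (m - 1))
    return term1 - term2           # lower is better; a proper scoring rule

def training_step(g, optimizer, past_obs, future_obs, m=10, latent_dim=32):
    # Draw m forecasts for one conditioning input by sampling the latent.
    z = torch.randn(m, latent_dim)
    samples = g(past_obs.expand(m, -1), z)
    loss = energy_score(samples, future_obs)    # no discriminator needed
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the score is minimized directly, the generator is pushed toward calibrated and sharp forecast distributions without adversarial training.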
Deep neural networks have emerged as the workhorse for a large section of robotics and control applications, especially as models for dynamical systems. Such data-driven models are in turn used for designing and verifying autonomous systems. This is particularly useful in modeling medical systems, where data can be leveraged to individualize treatment. In safety-critical applications, it is important that the data-driven model is conformant to established knowledge from the natural sciences. Such knowledge is often available or can often be distilled into a (possibly black-box) model $M$; for instance, the unicycle model for an F1 racing car. In this light, we consider the following problem: given a model $M$ and a state-transition dataset, we wish to best approximate the system model while remaining within a bounded distance of $M$. We propose a method to guarantee this conformance. Our first step is to distill the dataset into a few representative samples called memories, using the idea of a growing neural gas. Next, using these memories we partition the state space into disjoint subsets and compute bounds that should be respected by the neural network when the input is drawn from a particular subset. This serves as a symbolic wrapper for guaranteed conformance. We argue theoretically that this leads to only a bounded increase in approximation error, which can be controlled by increasing the number of memories. We experimentally show that on three case studies (Car Model, Drones, and Artificial Pancreas), our constrained neurosymbolic models conform to specified $M$ models (each encoding various constraints) with order-of-magnitude improvements compared to the augmented Lagrangian and vanilla training methods.
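As a structural illustration of the symbolic wrapper, the sketch below shows how per-partition output bounds could be enforced at inference time. All names are hypothetical, and the per-cell bounds are assumed to have been computed offline from $M$; this is a sketch of the idea, not the authors' code.

```python
import numpy as np

class ConformantWrapper:
    """Sketch: clip a learned dynamics model to per-cell bounds from M."""
    def __init__(self, net, memories, lower, upper):
        self.net = net            # trained neural state-transition model
        self.memories = memories  # (k, d) representative states (memories)
        self.lower = lower        # (k, out_dim) lower bounds per cell
        self.upper = upper        # (k, out_dim) upper bounds per cell

    def predict(self, x):
        # The memories induce a partition of the state space; locate the
        # cell of x via its nearest memory.
        cell = int(np.argmin(np.linalg.norm(self.memories - x, axis=1)))
        y = self.net(x)
        # Clipping keeps the prediction a bounded distance from M on
        # inputs drawn from this cell, by construction.
        return np.clip(y, self.lower[cell], self.upper[cell])
```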
This paper is a technical overview of DeepMind and Google's recent work on reinforcement learning for controlling commercial cooling systems. Building on expertise that began with cooling Google's data centers more efficiently, we recently conducted live experiments on two real-world facilities in partnership with Trane Technologies, a building management system provider. These live experiments had a variety of challenges in areas such as evaluation, learning from offline data, and constraint satisfaction. Our paper describes these challenges in the hope that awareness of them will benefit future applied RL work. We also describe the way we adapted our RL system to deal with these challenges, resulting in energy savings of approximately 9% and 13% respectively at the two live experiment sites.
Aspect-Based Sentiment Analysis is a dominant research area with potential applications in social media analytics, business, finance, and health. Prior works in this area are primarily based on supervised methods, with a few techniques using weak supervision limited to predicting a single aspect category per review sentence. In this paper, we present an extremely weakly supervised multi-label Aspect Category Sentiment Analysis framework that does not use any labelled data. We rely only on a single word per class as the initial indicative information. We further propose an automatic word selection technique to choose these seed category and sentiment words. We explore unsupervised language model post-training to improve the overall performance, and propose a multi-label generator model to generate multiple aspect category-sentiment pairs per review sentence. Experiments conducted on four benchmark datasets show that our method outperforms other weakly supervised baselines by a significant margin.
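To make the single-seed-word idea concrete, here is a minimal sketch of multi-label category assignment from one seed word per class. The embeddings, threshold, and category names are placeholders; the actual framework additionally handles sentiment pairing and language model post-training.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100  # embedding dimension (placeholder)

# One seed word per aspect category; random vectors stand in for the
# pretrained word embeddings a real system would supply.
seed_vecs = {"food": rng.normal(size=d),
             "service": rng.normal(size=d),
             "ambience": rng.normal(size=d)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def predict_categories(token_vecs, threshold=0.35):
    """Multi-label category prediction for one review sentence."""
    sent = token_vecs.mean(axis=0)  # average-pooled sentence vector
    scores = {c: cosine(sent, v) for c, v in seed_vecs.items()}
    # Multi-label: every category whose seed is similar enough is kept.
    return [c for c, s in scores.items() if s > threshold]

# Example: a 7-token sentence represented by placeholder embeddings.
print(predict_categories(rng.normal(size=(7, d))))
```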
The SNMMI Artificial Intelligence (SNMMI-AI) Summit, organized by the SNMMI AI Task Force, took place in Bethesda, MD on March 21-22, 2022. It brought together various community members and stakeholders from academia, healthcare, industry, patient representatives, and government (NIH, FDA), and considered various key themes to envision and facilitate a bright future for routine, trustworthy use of AI in nuclear medicine. In what follows, essential issues, challenges, controversies and findings emphasized in the meeting are summarized.
Existing regulations prohibit model developers from accessing protected attributes (gender, race, etc.), often resulting in fairness assessments on populations without knowing their protected groups. In such scenarios, institutions often adopt a separation between the model developers (who train models with no access to the protected attributes) and a compliance team (who may have access to the entire dataset for auditing purposes). However, the model developers might be allowed to test their models for bias by querying the compliance team for group fairness metrics. In this paper, we first demonstrate that simply querying for fairness metrics, such as statistical parity and equalized odds, can leak the protected attributes of individuals to the model developers. We demonstrate that there always exist strategies by which the model developers can identify the protected attribute of a targeted individual in the test dataset from just a single query. In particular, we show that one can reconstruct the protected attributes of all the individuals from $O(N_k \log(n/N_k))$ queries when $N_k \ll n$, using techniques from compressed sensing ($n$: size of the test dataset, $N_k$: size of the smallest group). Our results pose an interesting debate in algorithmic fairness: should querying for fairness metrics be viewed as a neutral-valued solution to ensure compliance with regulations? Or does it constitute a violation of regulations and privacy if the number of queries answered is enough for the model developers to identify the protected attributes of specific individuals? To address this supposed violation, we also propose Attribute-Conceal, a novel technique that achieves differential privacy by calibrating noise to the smooth sensitivity of our bias query, outperforming naive techniques such as the Laplace mechanism. We also include experimental results on the Adult dataset and synthetic data (covering a broad range of parameters).
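For intuition on the defense side, the sketch below applies the naive Laplace baseline mentioned above to a statistical parity query, using the loose global sensitivity bound that flipping one individual's prediction moves a group's positive rate by at most $1/n_{\min}$; Attribute-Conceal instead calibrates noise to the smooth sensitivity of the query. All names and numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def statistical_parity_gap(y_pred, groups):
    # |P(y_hat = 1 | group 0) - P(y_hat = 1 | group 1)|
    return abs(y_pred[groups == 0].mean() - y_pred[groups == 1].mean())

def noisy_parity_query(y_pred, groups, epsilon, n_min):
    # Naive Laplace mechanism: flipping one prediction changes a group's
    # positive rate by at most 1/n_min, the global sensitivity bound.
    sensitivity = 1.0 / n_min
    return statistical_parity_gap(y_pred, groups) + rng.laplace(
        scale=sensitivity / epsilon)

# Example query over a synthetic test set of 1,000 individuals.
y_pred = rng.integers(0, 2, size=1000)
groups = rng.integers(0, 2, size=1000)
print(noisy_parity_query(y_pred, groups, epsilon=1.0, n_min=400))
```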
Reinforcement learning (RL) techniques have been developed to optimize industrial cooling systems, offering substantial energy savings compared to traditional heuristic policies. A major challenge in industrial control is learning behaviors that are feasible in the real world due to machinery constraints. For example, certain actions can only be executed once every few hours, while other actions can be taken more frequently. Without extensive reward engineering and experimentation, an RL agent may not learn realistic operation of the machinery. To address this, we use hierarchical reinforcement learning with multiple agents that control subsets of actions according to their operation time scales. Our hierarchical approach achieves energy savings over existing baselines while maintaining constraints, such as operating chillers within safe bounds, in a simulated HVAC control environment.
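A minimal structural sketch of the timescale decomposition (all names hypothetical, and the agents are assumed to return dictionaries of actuator settings): one agent commits to infrequent, high-impact actions and holds them fixed, while another acts every step conditioned on that commitment.

```python
class HierarchicalPolicy:
    """Sketch: agents control disjoint action subsets per time scale."""
    def __init__(self, slow_agent, fast_agent, slow_period=48):
        self.slow_agent = slow_agent    # e.g., chiller staging decisions
        self.fast_agent = fast_agent    # e.g., setpoint adjustments
        self.slow_period = slow_period  # control steps between slow decisions
        self.slow_action = None

    def act(self, obs, step):
        if self.slow_action is None or step % self.slow_period == 0:
            # Infrequent actions are held fixed between decision points,
            # so mechanically infeasible switching is never proposed.
            self.slow_action = self.slow_agent.act(obs)
        # Frequent actions are conditioned on the standing slow action.
        fast_action = self.fast_agent.act(obs, self.slow_action)
        return {**self.slow_action, **fast_action}
```

Because feasibility is enforced by the control structure itself, it does not have to be rediscovered through reward engineering.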
In contrast to training traditional machine learning (ML) models in data centers, federated learning (FL) trains ML models over local datasets contained on resource-constrained heterogeneous edge devices. Existing FL algorithms aim to learn a single global model for all participating devices, which may not be helpful to all devices participating in the training, due to the heterogeneity of the data across the devices. Recently, Hanzely and Richtárik (2020) proposed a new formulation for training personalized FL models aimed at balancing the trade-off between the traditional global model and the local models that could be trained by individual devices using their private data only. They derived a new algorithm, called Loopless Gradient Descent (L2GD), to solve it, and showed that it leads to improved communication complexity in regimes where more personalization is required. In this paper, we equip their L2GD algorithm with a bidirectional compression mechanism to further reduce the communication bottleneck between the local devices and the server. Unlike other compression-based algorithms used in the FL setting, our compressed L2GD algorithm operates on a probabilistic communication protocol, in which communication does not happen on a fixed schedule. Moreover, our compressed L2GD algorithm maintains a convergence rate similar to vanilla SGD without compression. To validate the efficiency of our algorithm, we perform diverse numerical experiments on both convex and non-convex problems, using various compression techniques.
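The following sketch conveys the flavor of the probabilistic protocol with bidirectional compression; constants are simplified, a rand-$k$ sparsifier stands in for the compressor, and this is not the paper's exact update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_k(v, k):
    # Unbiased rand-k compressor: keep k random coordinates, rescale.
    out = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    out[idx] = v[idx] * (v.size / k)
    return out

def compressed_l2gd_step(x, local_grads, lam, p, lr, k):
    # x: (n_devices, d) local models. Communication fires with
    # probability p rather than on a fixed schedule.
    if rng.random() > p:
        # Local step on private data, taken with probability 1 - p.
        return x - (lr / (1 - p)) * local_grads
    # Communication step: devices send compressed models up, the server
    # broadcasts a compressed average back (bidirectional compression).
    uplink = np.stack([rand_k(xi, k) for xi in x])
    avg = rand_k(uplink.mean(axis=0), k)          # downlink compression
    # Pull each local model toward the average; lambda sets the degree
    # of personalization (smaller lambda = more personalized models).
    return x - (lr * lam / p) * (x - avg)
```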
Smart sensing provides an easier and more convenient data-driven mechanism for monitoring and control in the built environment. Data generated in the built environment are privacy-sensitive and limited. Federated learning is an emerging paradigm that provides privacy-preserving collaboration among multiple participants for model training, without sharing private and limited data. Noisy labels in the participants' datasets degrade performance and increase the number of communication rounds needed for federated learning to converge. Such a large number of communication rounds requires more time and energy to train the model. In this paper, we propose a federated learning approach to suppress the unequal distribution of noisy labels across the participants' datasets. The approach first estimates the noise ratio of each participant's dataset and normalizes the noise ratios using a server dataset. The proposed approach can handle bias in the server dataset and minimize its impact on the participants' datasets. Next, we use each participant's normalized noise ratio and influence to compute the optimal weighted contributions of the participants. We further derive an expression to estimate the number of communication rounds required for the proposed approach to converge. Finally, experimental results demonstrate the effectiveness of the proposed approach over existing techniques, in terms of both communication rounds and achieved performance in the built environment.
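One plausible reading of the weighting step, as a sketch only (the weighting formula below is hypothetical, not the paper's): correct each estimated noise ratio for bias introduced by the server dataset, then weight cleaner participants more heavily during aggregation.

```python
import numpy as np

def aggregation_weights(noise_ratios, server_bias=0.0):
    """Hypothetical weighting: cleaner participants contribute more."""
    # Correct each participant's estimated ratio for bias introduced by
    # the server dataset used during estimation, then clamp to [0, 1].
    r = np.clip(np.asarray(noise_ratios, dtype=float) - server_bias, 0.0, 1.0)
    w = 1.0 - r                    # penalize noisier participants
    return w / w.sum()             # mixing weights sum to one

def aggregate(updates, weights):
    # updates: (n_participants, n_params) flattened model updates.
    return np.average(updates, axis=0, weights=weights)

# Example: estimated noise ratios of 10%, 30%, and 60%.
print(aggregation_weights([0.10, 0.30, 0.60], server_bias=0.05))
```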
In the past, graph representations of real-world social networks have missed two important elements: the multiplicity of connections and the representation of time. To this end, in this paper we present a new dynamic heterogeneous graph representation for social networks, which includes time in every component of the graph, i.e., nodes and edges, each of different types to capture heterogeneity. We illustrate the power of this representation by presenting four time-dependent queries and deep learning problems that cannot easily be handled in conventional homogeneous graphs. As a proof of concept, we present a detailed representation of a new social media platform (Steemit), which we use to illustrate the dynamic querying capabilities as well as prediction tasks using graph neural networks (GNNs). The results illustrate the power of the dynamic heterogeneous graph representation for modeling social networks. Given that this is a relatively understudied area, we also point out opportunities for future work on query optimization and on new dynamic prediction tasks over heterogeneous graph structures.
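A minimal sketch of such a representation follows; the node and edge types and the query are illustrative, loosely modeled on Steemit entities.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    node_type: str        # heterogeneity: e.g., "user", "post", "comment"
    created_at: float     # time lives on nodes ...

@dataclass
class Edge:
    src: str
    dst: str
    edge_type: str        # e.g., "follows", "upvotes", "authors"
    timestamp: float      # ... and on every edge

class DynamicHeteroGraph:
    def __init__(self):
        self.nodes = {}
        self.edges = []   # an edge list, so parallel edges keep multiplicity

    def add_node(self, n: Node):
        self.nodes[n.node_id] = n

    def add_edge(self, e: Edge):
        self.edges.append(e)

    def edges_between(self, t0, t1, edge_type=None):
        # Time-dependent query: all interactions of a type in [t0, t1).
        return [e for e in self.edges
                if t0 <= e.timestamp < t1
                and (edge_type is None or e.edge_type == edge_type)]
```

A plain homogeneous adjacency structure would collapse both the edge types and the repeated interactions that queries like this rely on.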